
    A neural model for the visual tuning properties of action-selective neurons

    SUMMARY: The recognition of the actions of conspecifics is crucial for survival and social interaction. Most current models of the recognition of transitive (goal-directed) actions rely on the hypothesized role of internal motor simulations in action recognition. However, these models do not specify how visual information can be processed by cortical mechanisms in order to be compared with such motor representations. This raises the questions of how such visual processing might be accomplished, and to what extent motor processing is necessary to account for the visual properties of action-selective neurons.
We present a neural model for the visual processing of transitive actions that is consistent with physiological data and accomplishes the recognition of grasping actions from real video stimuli. Shape recognition is achieved by a view-dependent hierarchical neural architecture that retains some coarse position information at the highest level, which can be exploited by subsequent stages. Additionally, simple recurrent neural circuits integrate effector information over time and realize selectivity for temporal sequences. A novel mechanism combines information about the shape and position of object and effector in an object-centered frame of reference. Action-selective model neurons defined in such a relative reference frame are tuned to learned associations between object and effector shapes, as well as to their relative position and motion.
We demonstrate that this model reproduces a variety of electrophysiological findings on the visual properties of action-selective neurons in the superior temporal sulcus and of mirror neurons in area F5. Specifically, the model accounts for the fact that a majority of mirror neurons in area F5 show view dependence. The model predicts a number of electrophysiological results, some of which could be confirmed in recent experiments.
We conclude that the visual tuning of action-selective neurons can be accounted for by well-established, predominantly visual neural processes rather than by internal motor simulations.

METHODS: Shape recognition relies on a hierarchy of feature detectors of increasing complexity and invariance [1]. The mid-level features are learned from sequences of gray-level images depicting segmented views of hand and object shapes. The highest hierarchy level consists of detector populations for complete shapes with a coarse spatial resolution of approximately 3.7°. Additionally, effector shapes are integrated over time by asymmetric lateral connections between shape detectors, using a neural field approach [2]. These model neurons thus encode actions such as hand opening or closing for particular grip types.
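The sequence-selectivity mechanism can be illustrated with a minimal discrete-time sketch (hypothetical parameters and dynamics; not the authors' continuous neural-field implementation): each shape detector excites its successor through an asymmetric lateral connection, so a stimulus sweeping the detectors in the trained order accumulates more activity than the time-reversed sequence.

```python
import numpy as np

def run_sequence(order, n=20, g=0.8, leak=0.5):
    """Chain of shape detectors with asymmetric lateral connections:
    unit i excites unit i+1. A stimulus sweeping the detectors in the
    trained order rides on the lateral input and accumulates more
    activity than the time-reversed sequence."""
    u = np.zeros(n)                        # detector activities
    total = 0.0
    for k in order:                        # k: index of the stimulated detector
        ff = np.zeros(n)
        ff[k] = 1.0                        # feedforward drive from the stimulus
        lateral = g * np.roll(u, 1)        # asymmetric excitation i -> i+1
        lateral[0] = 0.0                   # no wrap-around
        u = np.maximum((1 - leak) * u + ff + lateral, 0.0)
        total += u.sum()
    return total

n = 20
forward = run_sequence(range(n), n=n)              # trained temporal order
reverse = run_sequence(range(n - 1, -1, -1), n=n)  # time-reversed stimulus
```

In this toy network the forward sweep yields a larger accumulated response than the reversed one, mimicking the temporal-order selectivity of the model neurons.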
We exploit a gain-field mechanism to implement the central coordinate transformation of the shape representations into an object-centered reference frame [3]. Typical effector-object interactions correspond to activity regions in such a relative reference frame and are learned from training examples. Similarly, simple motion-energy detectors are applied in the object-centered reference frame and encode relative motion. The properties of transitive action neurons are modeled as a multiplicative combination of relative shape and motion detectors.
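The gain-field idea can be sketched in one dimension as follows (a toy illustration with made-up tuning parameters, not the model's actual implementation): object-position responses multiplicatively modulate effector-position responses, and pooling the resulting basis functions along diagonals of constant positional difference yields a population code for the effector position in object-centered coordinates.

```python
import numpy as np

def gaussian_pop(x, centers, sigma=1.0):
    """Population code: Gaussian tuning curves over preferred positions."""
    return np.exp(-(x - centers) ** 2 / (2 * sigma ** 2))

centers = np.arange(-10, 11)   # preferred positions (1-D, retinal coordinates)
effector, obj = 3.0, -2.0      # stimulus positions

# Gain field: effector-tuned responses, multiplicatively modulated by
# the object-position population response.
gain_field = np.outer(gaussian_pop(effector, centers),
                      gaussian_pop(obj, centers))

# Read out the object-centered position by pooling basis units along
# diagonals of constant difference c_eff - c_obj.
diffs = centers[:, None] - centers[None, :]
rel_centers = np.arange(-20, 21)
readout = np.array([gain_field[diffs == r].sum() for r in rel_centers])

rel_pos = rel_centers[np.argmax(readout)]  # peak at effector - obj = 5
```

The readout peak depends only on the relative position, so the same relative code is obtained when effector and object are shifted together, which is the invariance the transformation is meant to provide.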

RESULTS: The model performance was tested on a set of 160 unsegmented sequences of hand grasping or placing actions performed on objects of different sizes, using different grip types and views. Hand actions and objects could be reliably recognized despite their mutual occlusions. Detectors on the highest level showed correct action tuning in more than 95% of the examples and generalized to untrained views. 
Furthermore, the model replicates a number of electrophysiological as well as imaging experiments on action-selective neurons, such as their particular selectivity for transitive actions compared to mimicked actions, their invariance to stimulus position, and their view dependence. In particular, using the same stimulus set, the model closely fits neural data from a recent electrophysiological experiment that confirmed sequence selectivity in mirror neurons in area F5, as previously predicted by the model.

References
[1] Serre, T. et al. (2007). IEEE Trans. Pattern Anal. Mach. Intell. 29, 411-426.
[2] Giese, M.A. and Poggio, T. (2003). Nat. Rev. Neurosci. 4, 179-192.
[3] Deneve, S. and Pouget, A. (2003). Neuron 37, 347-359.

    It was (not) me: Causal Inference of Agency in goal-directed actions

    Summary: 
The perception of one's own actions depends both on sensory information and on predictions derived from internal forward models [1]. The integration of these information sources depends critically on whether perceptual consequences are associated with one's own action (sense of agency) or with changes in the external world that are unrelated to the action. The perceived effects of actions should thus depend critically on the consistency between the predicted and the actual sensory consequences of actions. To test this idea, we used a virtual-reality setup to manipulate the consistency between pointing movements and their visual consequences, and investigated the influence of this manipulation on self-action perception. We then asked whether a Bayesian causal inference model, which assumes a latent agency variable controlling the attributed influence of one's own action on the perceptual consequences [2,3], would account for the empirical data: if the percept was attributed to the own action, visual and internal information should fuse in a Bayes-optimal manner, while this should not be the case if the visual stimulus was attributed to external influences. The model fits the data well, showing that small deviations between predicted and actual sensory information were still attributed to one's own action, whereas for large deviations subjects relied more on internal information. We discuss the performance of this causal inference model in comparison to alternative biologically plausible statistical models, applying methods for Bayesian model comparison.

Experiment: 
Participants were seated in front of a horizontal board on which their right hand was placed with the index finger on a haptic marker, representing the starting point for each trial. Participants were instructed to execute straight, fast (quasi-ballistic) pointing movements of fixed amplitude, but without an explicit visual target. The hand was occluded from the participants' view, and visual feedback about the peripheral part of the movement was provided by a cursor. Feedback was either veridical or rotated against the true direction of the hand movement by predefined angles. After each trial, participants were asked to report the subjectively experienced direction of the executed hand movement by placing a mouse cursor in that direction.

Model: 
We compared two probabilistic models. Both include a binary random gating variable that models the sense of 'agency', that is, the belief that the visual feedback is influenced by the subject's motor action. The first model assumes that both the visual feedback xv and the internal motor state estimate xe are directly caused by the (unobserved) real motor state xt (Fig. 1). The second model assumes instead that the expected visual feedback depends on the perceived direction of the own motor action xe (Fig. 2).
Results: Both models are in good agreement with the data. Fig. A shows the fit of Model 1 superimposed on the data from a single subject. Fig. B shows the belief that the visual stimulus was influenced by the own action, which decreases for large deviations between predicted and actual visual feedback. Bayesian model comparison shows a better fit for Model 1.
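A minimal sketch of such a causal inference model (illustrative noise and prior values, not the fitted parameters from the study): the posterior belief in agency weights a fused, reliability-weighted estimate against the internal estimate alone.

```python
import math

def perceived_direction(x_v, x_e, sig_v=4.0, sig_e=8.0,
                        ext_range=90.0, p_agency=0.7):
    """Causal inference of agency over a pointing direction (degrees).
    x_v: visual feedback direction, x_e: internal (forward-model) estimate."""
    # Likelihood of the observed deviation if the feedback is caused
    # by the own action (both cues corrupted by Gaussian noise)
    s2 = sig_v ** 2 + sig_e ** 2
    like_own = math.exp(-(x_v - x_e) ** 2 / (2 * s2)) / math.sqrt(2 * math.pi * s2)
    # If caused externally, the feedback is unrelated (uniform over the range)
    like_ext = 1.0 / (2 * ext_range)
    # Posterior belief that the feedback was caused by the own action
    belief = p_agency * like_own / (p_agency * like_own + (1 - p_agency) * like_ext)
    # Fused (reliability-weighted) estimate vs. internal estimate alone
    fused = (x_v / sig_v ** 2 + x_e / sig_e ** 2) / (1 / sig_v ** 2 + 1 / sig_e ** 2)
    return belief, belief * fused + (1 - belief) * x_e

# Small deviation: high agency belief, percept pulled toward the feedback.
# Large deviation: belief collapses and the percept follows x_e.
belief_small, percept_small = perceived_direction(x_v=5.0, x_e=0.0)
belief_large, percept_large = perceived_direction(x_v=60.0, x_e=0.0)
```

This reproduces the qualitative pattern described above: small feedback rotations are absorbed into the percept of the own movement, while large rotations are attributed to external causes and leave the percept near the internal estimate.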
Citations
[1] Wolpert, D.M., Ghahramani, Z., Jordan, M. (1995). Science 269, 1880-1882.
[2] Körding, K.P., Beierholm, U., Ma, W.J., Quartz, S., Tenenbaum, J.B., et al. (2007). PLoS ONE 2(9), e943.
[3] Shams, L., Beierholm, U. (2010). Trends Cogn. Sci. 14, 425-432.
Acknowledgements
This work was supported by the BCCN Tübingen (FKZ: 01GQ1002), the CIN Tübingen, the European Union (FP7-ICT-215866, project SEARISE), the DFG, and the Hermann and Lilly Schilling Foundation.

    Biologically Plausible Neural Circuits for Realization of Maximum Operations

    Object recognition in the visual cortex is based on a hierarchical architecture in which specialized brain regions along the ventral pathway extract object features of increasing levels of complexity, accompanied by greater invariance to stimulus size, position, and orientation. Recent theoretical studies postulate that a non-linear pooling function, such as the maximum (MAX) operation, could be fundamental in achieving such invariance. In this paper, we are concerned with neurally plausible mechanisms that may be involved in realizing the MAX operation. Four canonical circuits are proposed, each based on neural mechanisms that have been previously discussed in the context of cortical processing. Through simulations and mathematical analysis, we examine the relative performance and robustness of these mechanisms. We derive experimentally verifiable predictions for each circuit and discuss their respective physiological considerations.
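One commonly discussed candidate mechanism (a sketch of the general idea, not necessarily one of the four circuits analyzed in the paper) is softmax-style pooling via divisive normalization, which approaches the exact MAX operation as its gain parameter grows.

```python
import math

def softmax_pool(inputs, q=4.0):
    """Divisive-normalization pooling:
    y = sum_i x_i * exp(q * x_i) / sum_i exp(q * x_i).
    For q -> 0 this tends toward the mean of the inputs; for large q it
    approaches max(inputs)."""
    weights = [math.exp(q * x) for x in inputs]
    return sum(x * w for x, w in zip(inputs, weights)) / sum(weights)

responses = [0.2, 0.9, 0.4]              # afferent feature-detector activities
soft = softmax_pool(responses, q=1.0)    # soft pooling, below the maximum
hard = softmax_pool(responses, q=20.0)   # close to the true maximum, 0.9
```

Varying a single gain parameter thus interpolates between linear (SUM-like) and MAX-like pooling, which is one way such circuits can be probed experimentally.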

    Effects of aging on identifying emotions conveyed by point-light walkers

    M.G. was supported by EC FP7 HBP (grant 604102), PITN-GA-011-290011 (ABC), FP7-ICT-2013-10/611909 (KOROIBOT), GI 305/4-1, KA 1258/15-1, and BMBF FKZ: 01GQ1002A. K.S.P. was supported by a BBSRC New Investigator Grant. A.B.S. and P.J.B. were supported by an operating grant (528206) from the Canadian Institutes for Health Research. The authors also thank Donna Waxman for her valuable help in data collection for all experiments described here.

    Motor expertise facilitates the accuracy of state extrapolation in perception

    Ludolph, N., Plöger, J., Giese, M.A., Ilg, W. (2017). Motor expertise facilitates the accuracy of state extrapolation in perception. PLOS ONE 12(11): e0187666.

    Barriers and Challenges for Visually Impaired Students in PE - An Interview Study With Students in Austria, Germany, and the USA

    Physical education (PE) is an important part of school education worldwide and, at the same time, almost the only subject that explicitly deals with the body and movement. PE is therefore of elementary importance in the upbringing of young people. This also applies to children with visual impairments. However, existing findings on participation and belonging in PE, as well as on physical and motor development, reveal that this group of children and adolescents is noticeably disadvantaged in these respects. Against this background, this paper aims to explore fundamental barriers and challenges across different types of schools, types of schooling, and countries from the perspective of visually impaired children. The qualitative interview study with 22 children with visual impairments at different types of schools in three countries (Austria, Germany, USA) reveals that none of the respondents could escape the power of social distinctions and the related, problematic hierarchies. Hence, ideas of normality and associated values remain the main challenge for all of them. However, the type-forming analysis provides important insight across settings into how visually impaired children differ in this regard, allowing for greater sensitivity to the concerns of children with visual impairments.

    ...And After That Came Me. Subjective Constructions of Social Hierarchy in Physical Education Classes Among Youth with Visual Impairments in Germany

    The aim of this study was to reconstruct subjective constructions of experiences in PE and feelings of being valued within PE classes in Germany by students with visual impairment (VI). Two female and two male students from the upper secondary level (average age: 19.25 years) participated in the study. For the reconstruction of experiences of feeling valued, episodic interviews with a semi-structured interview guide were used. The data analysis was conducted with MAXQDA 2020, based on content-related structuring of qualitative text analysis with deductive-inductive category formation. To structure the analysis, the main category, feelings of being valued, was defined by two poles (positive feelings of being valued as opposed to bullying). As a main finding, respondents primarily reported negative feelings and experiences characterized by instances of bullying, discrimination, and physical and social isolation, perpetuated by both their peers and teachers. In search of a deeper understanding, we identified social hierarchy as an underlying structure determining the students' perceived positioning within the social context and thus directing their feelings of being (de-)valued. It became evident that it is not the setting per se that determines social hierarchy, but rather the concrete manifestation of social hierarchy within it.

    Modeling of predictive human movement coordination patterns for applications in computer graphics

    The planning of human body movements is highly predictive. Within a sequence of actions, the anticipation of a final task goal modulates the individual actions within the overall pattern of motion. An example is a sequence of steps that is coordinated with the grasping of an object at the end of the step sequence. In contrast to this property of natural human movements, real-time animation systems in computer graphics often model complex activities by a sequential concatenation of individual pre-stored movements, where only the movement before accomplishing the goal is adapted. We present a learning-based technique that models the highly adaptive, predictive movement coordination in humans, illustrated for the example of the coordination of walking and reaching. The proposed system for the real-time synthesis of human movements models complex activities by a sequential concatenation of movements, which are approximated by the superposition of kinematic primitives that have been learned from trajectory data by anechoic demixing, using a step-wise regression approach. The kinematic primitives are then approximated by stable solutions of nonlinear dynamical systems (dynamic primitives) that can be embedded in control architectures. We present a control architecture that generates highly adaptive, predictive full-body movements for reaching while walking with a highly human-like appearance. We demonstrate that the generated behavior is highly robust, even in the presence of strong perturbations that require the online insertion of additional steps in order to accomplish the desired task.
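The anechoic mixing model underlying the learned kinematic primitives can be written as x_i(t) = sum_j w_ij s_j(t - tau_ij). A minimal sketch with two hypothetical periodic source functions (illustrative weights and delays, not values learned from motion data):

```python
import math

def anechoic_mixture(weights, delays, sources, t):
    """Anechoic mixing: each joint-angle trajectory is a weighted
    superposition of shared source signals with joint-specific delays,
    x_i(t) = sum_j w_ij * s_j(t - tau_ij)."""
    return [sum(w * s(t - d) for w, d, s in zip(w_row, d_row, sources))
            for w_row, d_row in zip(weights, delays)]

# Two hypothetical periodic primitives (in the paper these would be
# learned from trajectory data by anechoic demixing)
sources = [lambda t: math.sin(2 * math.pi * t),
           lambda t: math.sin(4 * math.pi * t)]
weights = [[1.0, 0.3],    # joint 1: mixing weights w_ij
           [0.5, 0.8]]    # joint 2
delays = [[0.00, 0.10],   # joint-specific time delays tau_ij
          [0.25, 0.05]]

angles = anechoic_mixture(weights, delays, sources, t=0.4)  # joint angles at t
```

Because whole trajectories are compressed into a few shared sources plus per-joint weights and delays, adapting a movement online reduces to adjusting this small set of parameters rather than re-planning full trajectories.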

    Neural theory for the perception of causal actions

    The efficient prediction of the behavior of others requires the recognition of their actions and an understanding of their action goals. In humans, this process is fast and extremely robust, as demonstrated by classical experiments showing that human observers reliably judge causal relationships and attribute interactive social behavior to strongly simplified stimuli consisting of simple moving geometrical shapes. While psychophysical experiments have identified critical visual features that determine the perception of causality and agency from such stimuli, the underlying detailed neural mechanisms remain largely unclear, and it is an open question why humans developed this advanced visual capability at all. We created pairs of naturalistic and abstract stimuli of hand actions that were exactly matched in terms of their motion parameters. We show that varying critical stimulus parameters for both stimulus types leads to very similar modulations of the perception of causality. However, the additional form information about the hand shape and its relationship with the object supports more fine-grained distinctions for the naturalistic stimuli. Moreover, we show that a physiologically plausible model for the recognition of goal-directed hand actions reproduces the observed dependencies of causality perception on critical stimulus parameters. These results support the hypothesis that selectivity for abstract action stimuli might emerge from the same neural mechanisms that underlie the visual processing of natural goal-directed action stimuli. Furthermore, the model proposes specific detailed neural circuits underlying this visual function, which can be evaluated in future experiments.
Acknowledgements
Seventh Framework Programme (European Commission) (Tango Grant FP7-249858-TP3 and AMARSi Grant FP7-ICT-248311), Deutsche Forschungsgemeinschaft (Grant GI 305/4-1), Hermann and Lilly Schilling Foundation for Medical Research.

    Neural and Computational Mechanisms of Action Processing: Interaction between Visual and Motor Representations

    Giese, M.A., Rizzolatti, G. (2015). Neural and Computational Mechanisms of Action Processing: Interaction between Visual and Motor Representations. Neuron 88(1), 167-180.